Hyperparameter optimization (HPO) is crucial for machine learning algorithms to achieve satisfactory performance, and its progress has been boosted by related benchmarks. Nonetheless, existing benchmarking efforts all focus on HPO while ignoring federated learning (FL), a promising paradigm for collaboratively learning models from decentralized data. In this paper, we first identify, from various aspects, what makes HPO for FL algorithms unique. Due to this uniqueness, existing HPO benchmarks no longer satisfy the need to compare HPO methods in the FL setting. To facilitate research on HPO in the FL setting, we propose and implement a benchmark suite, FedHPO-B, which incorporates comprehensive FL tasks, enables efficient function evaluations, and eases continuing extensions. We also conduct extensive experiments based on FedHPO-B to benchmark a few HPO methods. We open-source FedHPO-B at https://github.com/alibaba/federatedscope/tree/master/benchmark/fedhpob.
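As a rough, hedged illustration of the kind of workflow such a benchmark supports (this is not the actual FedHPO-B API; the configuration keys and the evaluation function below are hypothetical stand-ins), a minimal random-search HPO loop over a black-box federated objective might look as follows:

    import random

    # Hypothetical search space for an FL task; these keys are illustrative,
    # not FedHPO-B's real configuration names.
    SEARCH_SPACE = {
        "learning_rate": [1e-3, 1e-2, 1e-1],
        "local_update_steps": [1, 2, 4, 8],
        "batch_size": [16, 32, 64],
    }

    def sample_config():
        """Draw one hyperparameter configuration uniformly at random."""
        return {name: random.choice(choices) for name, choices in SEARCH_SPACE.items()}

    def evaluate_fl_config(config):
        """Stand-in for a benchmark-provided function evaluation.

        A real benchmark would return the validation metric of an FL run,
        e.g. via a tabular lookup or a surrogate model, so HPO methods can be
        compared without retraining for every query. Here we return a
        synthetic score so the sketch runs end to end.
        """
        return (-abs(config["learning_rate"] - 1e-2)
                - 0.01 * config["local_update_steps"]
                + 0.001 * config["batch_size"])

    def random_search(budget=20):
        """Evaluate `budget` random configurations and keep the best one."""
        best_config, best_score = None, float("-inf")
        for _ in range(budget):
            config = sample_config()
            score = evaluate_fl_config(config)
            if score > best_score:
                best_config, best_score = config, score
        return best_config, best_score

    if __name__ == "__main__":
        print(random_search())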
Personalized federated learning (pFL), which utilizes and deploys distinct local models, has attracted increasing attention in recent years owing to its success in handling the statistical heterogeneity of FL clients. However, standardized evaluation and systematic analysis of diverse pFL methods remain challenging. First, highly diverse datasets, FL simulation settings, and pFL implementations prevent fast and fair comparison of pFL methods. Second, the effectiveness and robustness of pFL methods are under-explored in various practical scenarios, such as generalization to new clients and the participation of resource-limited clients. Finally, the current pFL literature diverges in the evaluation and ablation protocols it adopts. To tackle these challenges, we propose the first comprehensive pFL benchmark, pFL-Bench, to facilitate rapid, reproducible, standardized, and thorough pFL evaluation. The proposed benchmark contains more than 10 datasets from diverse application domains with unified data partitions and realistic heterogeneous settings; a modular and easy-to-extend pFL codebase with more than 20 competitive pFL baseline implementations; and systematic evaluations under containerized environments in terms of generalization, fairness, system overhead, and convergence. We highlight the benefits and potential of state-of-the-art pFL methods and hope that pFL-Bench enables further pFL research and broad applications that would otherwise be difficult owing to the lack of a dedicated benchmark. The code is released at https://github.com/alibaba/federatedscope/tree/master/benchmark/pfl-bench.
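One ingredient mentioned above, unified data partitions with realistic heterogeneity, is often realized with a Dirichlet label split. The sketch below is my own illustration of that common recipe (it is not pFL-Bench's partitioner; the function name and defaults are assumptions):

    import numpy as np

    def dirichlet_partition(labels, num_clients, alpha=0.5, seed=0):
        """Split sample indices across clients with label-skewed (non-IID) proportions.

        Smaller alpha -> more heterogeneous per-client label distributions.
        """
        rng = np.random.default_rng(seed)
        labels = np.asarray(labels)
        client_indices = [[] for _ in range(num_clients)]
        for cls in np.unique(labels):
            cls_idx = rng.permutation(np.where(labels == cls)[0])
            proportions = rng.dirichlet(alpha * np.ones(num_clients))
            # Cumulative split points of this class's samples across clients.
            split_points = (np.cumsum(proportions)[:-1] * len(cls_idx)).astype(int)
            for client_id, chunk in enumerate(np.split(cls_idx, split_points)):
                client_indices[client_id].extend(chunk.tolist())
        return client_indices

    # Example: partition a toy label vector among 10 clients.
    toy_labels = np.random.default_rng(1).integers(0, 5, size=1000)
    parts = dirichlet_partition(toy_labels, num_clients=10, alpha=0.3)
    print([len(p) for p in parts])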
To investigate the heterogeneity of federated learning in real-world scenarios, we generalize classic federated learning to federated hetero-task learning, which emphasizes the inconsistency among participants in federated learning in terms of both data distributions and learning tasks. We also present B-FHTL, a federated hetero-task learning benchmark consisting of simulated datasets, FL protocols, and a unified evaluation mechanism. The B-FHTL datasets contain three well-designed federated learning tasks with increasing heterogeneity, each of which simulates clients with different non-IID data and learning tasks. To ensure fair comparison among different FL algorithms, B-FHTL builds in the whole FL protocol, provides high-level APIs to avoid privacy leakage, and presets the most common evaluation metrics spanning different learning tasks, such as regression, classification, and text tasks. Furthermore, we compare FL algorithms from the fields of federated multi-task learning, federated personalization, and federated meta-learning within B-FHTL, and highlight the influence of heterogeneity and the difficulty of federated hetero-task learning. Our benchmark, including the federated datasets, protocols, evaluation mechanism, and preliminary experiments, is open-sourced at https://github.com/alibaba/federatedscope/tree/master/benchmark/b-fhtl.
The incredible development of federated learning (FL) has benefited various tasks in the domains of computer vision and natural language processing, and existing frameworks such as TFF and FATE have made deployment easy in real-world applications. However, federated graph learning (FGL), even though graph data are prevalent, has not been well supported due to its unique characteristics and requirements. The lack of FGL-related frameworks increases the effort required to accomplish reproducible research and deploy in real-world applications. In this paper, we first discuss the challenges in creating an easy-to-use FGL package and accordingly present our implemented package FederatedScope-GNN (FS-G), which provides (1) a unified view for modularizing and expressing FGL algorithms; (2) comprehensive data and models for out-of-the-box FGL capabilities; (3) an efficient model auto-tuning component; and (4) off-the-shelf privacy attack and defense abilities. We validate the effectiveness of FS-G by conducting extensive experiments, which simultaneously yield many valuable insights about FGL. Moreover, we employ FS-G to serve FGL applications in a real-world e-commerce scenario, where the attained improvements indicate great potential business benefits. We publicly release FS-G at https://github.com/alibaba/federatedscope, as a submodule of FederatedScope, to promote research on FGL and enable broad applications that would otherwise be infeasible due to the lack of a dedicated package.
Although existing federated learning (FL) platforms have made remarkable progress in providing development infrastructure, these platforms may not cope well with the challenges brought by various kinds of heterogeneity, including heterogeneity in participants' local data, resources, behaviors, and learning goals. To fill this gap, in this paper we propose a novel FL platform named FederatedScope, which employs an event-driven architecture to give users great flexibility to independently describe the behaviors of different participants. Such a design makes it easy for users to describe participants with various local training processes, learning goals, and backends, and to coordinate them into an FL course with synchronous or asynchronous training strategies. Towards an easy-to-use and flexible platform, FederatedScope provides rich types of plug-in operations and components for efficient further development, and we have implemented several important components to better help users with privacy protection, attack simulation, and auto-tuning. We have released FederatedScope at https://github.com/alibaba/federatedscope to promote academic research and industrial deployment of federated learning in a wide range of scenarios.
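To make the event-driven idea concrete, here is a deliberately simplified sketch (not FederatedScope's real API; all class and message names are invented for illustration) in which participants register handlers for message events and a tiny message bus coordinates one synchronous averaging round between a server and two clients:

    # Minimal event-driven FL round: each participant reacts to messages through
    # registered handlers, instead of following a hard-coded training loop.

    class Participant:
        def __init__(self, name):
            self.name = name
            self.handlers = {}

        def register(self, event, handler):
            self.handlers[event] = handler

        def handle(self, event, payload, bus):
            if event in self.handlers:
                self.handlers[event](payload, bus)

    class MessageBus:
        def __init__(self):
            self.participants = {}

        def add(self, participant):
            self.participants[participant.name] = participant

        def send(self, target, event, payload):
            self.participants[target].handle(event, payload, self)

    updates = []  # client updates collected by the server in this round

    def on_model_broadcast(payload, bus):
        # Client-side handler: "train" by adding 1.0 to the received weights,
        # then report the update back to the server.
        updates.append(payload["weights"] + 1.0)
        bus.send("server", "client_update", {"weights": updates[-1]})

    def on_client_update(payload, bus):
        # Server-side handler: aggregate once both clients have reported.
        if len(updates) == 2:
            print("aggregated weights:", sum(updates) / len(updates))

    bus = MessageBus()
    server = Participant("server")
    server.register("client_update", on_client_update)
    bus.add(server)
    for i in range(2):
        client = Participant(f"client{i}")
        client.register("model_broadcast", on_model_broadcast)
        bus.add(client)
    for i in range(2):
        bus.send(f"client{i}", "model_broadcast", {"weights": 0.0})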
Inspired by the recent success of Transformers for Natural Language Processing and the Vision Transformer for Computer Vision, many researchers in the medical imaging community have flocked to Transformer-based networks for various mainstream medical tasks such as classification, segmentation, and estimation. In this study, we analyze two recently published Transformer-based network architectures for the task of multimodal head-and-neck tumor segmentation and compare their performance to the de facto standard 3D segmentation network, the nnU-Net. Our results showed that modeling long-range dependencies may be helpful in cases where large structures are present and/or a large field of view is needed. However, for small structures such as head-and-neck tumors, the convolution-based U-Net architecture seemed to perform well, especially when the training dataset is small and computational resources are limited.
Recent advances in neural radiance fields have enabled the high-fidelity 3D reconstruction of complex scenes for novel view synthesis. However, it remains underexplored how the appearance of such representations can be efficiently edited while maintaining photorealism. In this work, we present PaletteNeRF, a novel method for photorealistic appearance editing of neural radiance fields (NeRF) based on 3D color decomposition. Our method decomposes the appearance of each 3D point into a linear combination of palette-based bases (i.e., 3D segmentations defined by a group of NeRF-type functions) that are shared across the scene. While our palette-based bases are view-independent, we also predict a view-dependent function to capture the color residual (e.g., specular shading). During training, we jointly optimize the basis functions and the color palettes, and we also introduce novel regularizers to encourage the spatial coherence of the decomposition. Our method allows users to efficiently edit the appearance of the 3D scene by modifying the color palettes. We also extend our framework with compressed semantic features for semantic-aware appearance editing. We demonstrate that our technique is superior to baseline methods both quantitatively and qualitatively for appearance editing of complex real-world scenes.
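The decomposition at the heart of the method can be summarized as blending a small set of scene-wide palette colors with per-point weights and adding a view-dependent residual. The snippet below is a toy illustration of that composition and of how a palette edit recolors the scene (array shapes and names are my own, not the paper's code):

    import numpy as np

    def compose_color(weights, palette, residual):
        """Recompose per-point RGB from palette bases.

        weights:  (N, K) blending weights of N points over K palette entries
        palette:  (K, 3) RGB palette colors shared across the scene
        residual: (N, 3) view-dependent residual (e.g., specular shading)
        """
        return weights @ palette + residual

    weights = np.random.dirichlet(np.ones(4), size=1024)          # toy weights
    palette = np.array([[0.8, 0.2, 0.2], [0.2, 0.8, 0.2],
                        [0.2, 0.2, 0.8], [0.9, 0.9, 0.9]])
    residual = np.zeros((1024, 3))

    # Appearance editing: swap one palette entry while keeping the learned
    # weights and residual fixed (here, turn the reddish base into blue).
    edited_palette = palette.copy()
    edited_palette[0] = [0.2, 0.6, 0.9]
    original_rgb = compose_color(weights, palette, residual)
    edited_rgb = compose_color(weights, edited_palette, residual)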
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies with open-ended task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse, robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io
Causal inference is the process of using assumptions, study designs, and estimation strategies to draw conclusions about the causal relationships between variables based on data. This allows researchers to better understand the underlying mechanisms at work in complex systems and make more informed decisions. In many settings, we may not fully observe all the confounders that affect both the treatment and outcome variables, complicating the estimation of causal effects. To address this problem, a growing literature in both causal inference and machine learning proposes to use Instrumental Variables (IV). This paper serves as the first effort to systematically and comprehensively introduce and discuss IV methods and their applications in both causal inference and machine learning. First, we provide the formal definition of IVs and discuss the identification problem of IV regression methods under different assumptions. Second, we categorize the existing work on IV methods into three streams according to the focus of the proposed methods: two-stage least squares with IVs, control functions with IVs, and evaluation of IVs. For each stream, we present both the classical causal inference methods and recent developments in the machine learning literature. Then, we introduce a variety of applications of IV methods in real-world scenarios and provide a summary of the available datasets and algorithms. Finally, we summarize the literature, discuss open problems, and suggest promising future research directions for IV methods and their applications. We also develop a toolkit of the IV methods reviewed in this survey at https://github.com/causal-machine-learning-lab/mliv.
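For the first stream above, two-stage least squares has a particularly compact recipe: regress the treatment on the instrument, then regress the outcome on the fitted treatment. A minimal sketch on synthetic data with a hidden confounder (the data-generating coefficients are arbitrary choices for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    u = rng.normal(size=n)                       # unobserved confounder
    z = rng.normal(size=n)                       # instrument: affects t, not y directly
    t = 0.8 * z + u + rng.normal(size=n)         # treatment
    y = 2.0 * t + 3.0 * u + rng.normal(size=n)   # outcome; true causal effect is 2.0

    def add_intercept(x):
        return np.column_stack([np.ones(len(x)), x])

    # Stage 1: project the treatment onto the instrument.
    stage1_coef, *_ = np.linalg.lstsq(add_intercept(z), t, rcond=None)
    t_hat = add_intercept(z) @ stage1_coef

    # Stage 2: regress the outcome on the fitted (exogenous) part of the treatment.
    stage2_coef, *_ = np.linalg.lstsq(add_intercept(t_hat), y, rcond=None)

    naive_coef, *_ = np.linalg.lstsq(add_intercept(t), y, rcond=None)
    print("naive OLS estimate:", naive_coef[1])   # biased by the confounder
    print("2SLS estimate:     ", stage2_coef[1])  # close to the true value 2.0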
The success of deep learning is partly attributed to the availability of massive data downloaded freely from the Internet. However, it also means that users' private data may be collected by commercial organizations without consent and used to train their models. Therefore, it is important and necessary to develop a method or tool to prevent unauthorized data exploitation. In this paper, we propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable to protect the data privacy of its owners. Specifically, the noise produced by the generator for each image has the confounder property. It can build spurious correlations between images and labels, so that the model cannot learn the correct mapping from images to labels from this noise-added dataset. Meanwhile, the discriminator is used to ensure that the generated noise is small and imperceptible, thereby preserving the normal utility of the encrypted image for humans. The experiments are conducted on six image classification datasets, consisting of three natural object datasets and three medical datasets. The results demonstrate that our method not only outperforms state-of-the-art methods in standard settings, but can also be applied to fast encryption scenarios. Moreover, we present a series of transferability and stability experiments to further illustrate the effectiveness and superiority of our method.
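As a heavily simplified sketch of the two objectives described above (the architecture, loss weighting, and variable names are my assumptions, not the paper's implementation), the generator's loss could couple a confounding term, which makes the bounded noise predictive of the label, with a GAN term that keeps the noised image indistinguishable from the clean one:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NoiseGenerator(nn.Module):
        """Produces a small, bounded perturbation for each input image."""
        def __init__(self, eps=8 / 255):
            super().__init__()
            self.eps = eps
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 3, 3, padding=1),
            )

        def forward(self, x):
            return self.eps * torch.tanh(self.net(x))   # keep the noise imperceptibly small

    generator = NoiseGenerator()
    discriminator = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))   # clean vs. noised
    label_head = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))     # reads labels off the noise

    images = torch.rand(8, 3, 32, 32)
    labels = torch.randint(0, 10, (8,))

    noise = generator(images)
    protected = (images + noise).clamp(0, 1)

    # (1) Confounder term: the noise alone should predict the label, planting a
    #     spurious image-label correlation that derails models trained on the data.
    confounder_loss = F.cross_entropy(label_head(noise), labels)
    # (2) GAN term: the discriminator should see protected images as "real",
    #     which pushes the perturbation to stay visually negligible.
    gan_loss = F.binary_cross_entropy_with_logits(
        discriminator(protected), torch.ones(8, 1))
    generator_loss = confounder_loss + gan_loss   # one optimization step would backprop this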